Quick guides
Index
Log in to the cluster ↩
Accounts are for personal, non-transferable use. If the project requires access to the machine for someone else, or an increase in the assigned resources, the project manager is responsible for making this type of request.
Login nodes ↩
Cluster | Login nodes
---|---
MareNostrum 4 (GP) | mn1.bsc.es, mn2.bsc.es, mn3.bsc.es
CTE-AMD | amdlogin1.bsc.es
CTE-POWER | plogin1.bsc.es, plogin2.bsc.es
MinoTauro | mt1.bsc.es
All connections must be done through SSH (Secure SHell), for example:
mylaptop$> ssh {username}@mn1.bsc.es
Password change ↩
For security reasons, you must change your initial password.
To change your password, you have to log in to the Storage (Data Transfer) machine:
mylaptop$> ssh {username}@dt01.bsc.es
using the same username and password as in the cluster. Then, run the 'passwd' command.
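A minimal sketch of the whole sequence is shown below; the prompts are illustrative and may differ on the actual system:
mylaptop$> ssh {username}@dt01.bsc.es
dt01$> passwd
Current password:
New password:
Retype new password: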
The new password will become effective after about 10 minutes.
Access from/to the outside ↩
The login nodes are the only nodes accessible from the outside, but no connections are allowed from the cluster to the outside world for security reasons.
All file transfers from/to the outside must be executed from your local machine and not within the cluster:
Example to copy files or directories from MN4 to an external machine:
mylaptop$> scp -r {username}@dt01.bsc.es:"MN4_SOURCE_dir" "mylaptop_DEST_dir"
Example to copy files or directories from an external machine to MN4:
mylaptop$> scp -r "mylaptop_SOURCE_dir" {username}@dt01.bsc.es:"MN4_DEST_dir"
Directories and file systems ↩
There are different partitions of disk space. Each area may have specific size limits and usage policies.
Basic directories under GPFS ↩
GPFS (General Parallel File System, a distributed networked filesystem) can be accessed from all the nodes and from the Data Transfer Machine (dt01.bsc.es).
The available GPFS directories and file systems are:
/apps
: This filesystem holds the applications and libraries already installed on the machine for everyday use. Users cannot write to it.

/gpfs/home
: After login, it is the default work area where users can save source code, scripts, and other personal data. The space quota is individual (and relatively small). It is not recommended for running jobs; please run your jobs on your group's /gpfs/projects or /gpfs/scratch instead.

/gpfs/projects
: Intended for data sharing between users of the same group or project. All members of the group share the space quota.

/gpfs/scratch
: Each user has their own directory under this partition, for example to store temporary job files during execution. All members of the group share the space quota.
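For example, a common pattern is to keep shared input data under /gpfs/projects and to run jobs from your own directory under /gpfs/scratch. The paths below are illustrative; the exact layout of group and user directories may differ:
$> cp -r my_dataset /gpfs/projects/{group}/    # illustrative: share input data with your group
$> cd /gpfs/scratch/{group}/{username}         # illustrative: your personal scratch directory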
Storage space limits/quotas ↩
To check the disk space limits, as well as your current usage of each file system, run:
$> bsc_quota
Running jobs ↩
Submit to queues ↩
Job submission to the queue system has to be done through Slurm, for example:
To submit a job:
$> sbatch {job_script}
To show all the submitted jobs:
$> squeue
To cancel a job:
$> scancel {job_id}
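The {job_script} above is a shell script containing Slurm directives. A minimal sketch is shown below; the job name, resources, time limit, and executable are illustrative and must be adapted to your project:
#!/bin/bash
#SBATCH --job-name=test_job          # job name (illustrative)
#SBATCH --output=test_job_%j.out     # standard output file (%j expands to the job ID)
#SBATCH --error=test_job_%j.err      # standard error file
#SBATCH --ntasks=1                   # number of tasks
#SBATCH --cpus-per-task=4            # CPUs (cores) per task
#SBATCH --time=00:10:00              # wallclock limit (hh:mm:ss)

./my_program                         # hypothetical executable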
Queue limits ↩
To check the limits for the queues (QoS) assigned to the project, you can do:
$> bsc_queues
Interactive jobs ↩
Interactive sessions
Allocation of an interactive session has to be done through Slurm, for example:
To request an interactive session on a compute node:
$> salloc -n 1 -c 4 # example to request 1 task, 4 CPUs (cores) per task
To request an interactive session on a non-shared (exclusive) compute node:
$> salloc --exclusive
To request an interactive session on a compute node using GPUs:
$> salloc -c 80 --gres=gpu:2 # example to request 80 CPUs (cores) + 2 GPUs
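Depending on the configuration, salloc opens an interactive shell within the allocation; commands can be run there directly or launched with srun, and the session is released with exit. A short illustrative sequence (the executable name is hypothetical):
$> salloc -n 4 -c 2           # request 4 tasks, 2 CPUs (cores) per task
$> srun ./my_mpi_program      # hypothetical executable, runs inside the allocation
$> exit                       # release the allocation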